Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. Quality and originality: The authors' main claims make intuitive sense. Specifically, Figure 1 presents a generative model that separates note onsets from activation and spectral information. This is in keeping with the physics of a piano, where a pianist initiates a note onset by sending the hammer into free flight. The note's harmonics then change over time, based both on natural decay and on the piano's physical dampers.
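The structure the review describes can be pictured as a convolutive, NMF-style decomposition. Below is a minimal sketch of that idea, not the paper's actual model: each note k gets a hypothetical spectral template W[:, k], a binary onset train, and an exponential decay envelope standing in for string decay and damping.

```python
import numpy as np

def synthesize_spectrogram(W, onsets, decay):
    """Toy generative model: X ~= W @ A, where each row of A is a
    note's onset train convolved with an exponentially decaying
    envelope (a stand-in for string decay and damper release)."""
    K, T = onsets.shape
    envelopes = np.exp(-np.outer(decay, np.arange(T)))  # (K, T)
    activations = np.zeros((K, T))
    for k in range(K):
        # Each onset launches a decaying activation, like a hammer strike.
        activations[k] = np.convolve(onsets[k], envelopes[k])[:T]
    return W @ activations  # (F, T) nonnegative spectrogram

# Two hypothetical notes, 64 frequency bins, 100 frames.
rng = np.random.default_rng(0)
W = rng.random((64, 2))        # spectral templates
onsets = np.zeros((2, 100))
onsets[0, 5] = 1.0
onsets[1, 40] = 1.0
X = synthesize_spectrogram(W, onsets, decay=np.array([0.05, 0.1]))
```

Factoring onsets apart from activations in this way is what ties the model to the hammer-and-damper physics the review mentions: a dense activation matrix would blur the two together.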


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The key advance in the model is the concept of a subpopulation. To fit the model, the authors propose sophisticated initialization procedures and compare methods. One question about the formulation: why is the matrix A not block diagonal? Wouldn't a block-diagonal structure allow any mixture factor model to be represented as well?
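To make the block-diagonal question concrete: under the usual factor-model reading x = A z + noise (notation assumed here, not taken from the paper), a block-diagonal A gives each subpopulation its own private factors with no cross-group loadings. A minimal sketch with hypothetical loadings:

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical loadings: two subpopulations, two private factors each.
A1 = np.array([[1.0, 0.2],
               [0.5, 1.0],
               [0.3, 0.7]])        # group 1: 3 observed dimensions
A2 = np.array([[0.9, 0.1],
               [0.2, 1.1]])        # group 2: 2 observed dimensions
A = block_diag(A1, A2)             # (5, 4): zero cross-group loadings

rng = np.random.default_rng(0)
z = rng.standard_normal(4)                    # latent factors
x = A @ z + 0.1 * rng.standard_normal(5)      # x = A z + noise
```

Restricting A to this form recovers an independent factor model per subpopulation, which is the sense in which a block-diagonal A could represent any mixture factor model.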


We further evaluate our model on five UCI datasets

Neural Information Processing Systems

We thank all the reviewers for the valuable comments and suggestions. Feature normalization is applied in the experiments. The MLP has one hidden layer of 50 units. We appreciate the suggestions on writing and will fix these issues in the future revision. We acknowledge that over-parameterization may fit some real applications better in certain scenarios.
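For reference, a minimal sketch of the setup described above, assuming scikit-learn, z-score feature normalization, and a stand-in UCI-derived dataset (the authors' exact pipeline and datasets are not specified here):

```python
from sklearn.datasets import load_breast_cancer   # a UCI-derived stand-in
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature normalization followed by an MLP with one hidden layer of 50 units.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```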


The meta-review recommended acceptance, and noted clear presentation, strong empirical results, and thorough ablations

Neural Information Processing Systems

We thank the reviewers for their thoughtful feedback. R1 questioned whether BigBiGAN truly represents an "important step forward toward answering some fundamental questions". On R1's specific concern that we only demonstrate "good linear probe classification performance", we now have additional results addressing this point. We haven't experimented with smaller datasets. R1 suggested adding comparisons to other methods, including results published after the submission deadline. Finally, R1 asked for comparisons with VQ-VAE-2 on generation and representation learning. BigBiGAN is trained unconditionally (without class information), while VQ-VAE-2 is class-conditional, so it's not a direct comparison; moreover, VQ-VAE-2 did not report representation learning results (unsupervised or otherwise).
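The "linear probe" protocol R1 refers to trains a linear classifier on frozen features and never updates the encoder. A minimal sketch, with random features standing in for the output of a frozen encoder (the encoder call mentioned in the comments is hypothetical, not BigBiGAN's API):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(feats_train, y_train, feats_test, y_test):
    """Fit a linear classifier on frozen features and report test
    accuracy; the upstream encoder itself is never updated."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(feats_train, y_train)
    return probe.score(feats_test, y_test)

# In practice feats_* would come from a frozen encoder,
# e.g. feats = encode(images) for some pretrained model.
rng = np.random.default_rng(0)
feats_tr, y_tr = rng.standard_normal((200, 64)), rng.integers(0, 10, 200)
feats_te, y_te = rng.standard_normal((50, 64)), rng.integers(0, 10, 50)
print(linear_probe_accuracy(feats_tr, y_tr, feats_te, y_te))
```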